Department of Family Services – Older Adults

CONTACT INFORMATION: Monday–Friday 8 a.m.–4:30 p.m.
703-324-7948 TTY 711
12011 Government Center Parkway, Suite 708
Fairfax, VA 22035
Trina Mayhan-Webb
Director

Artificial Intelligence and Deepfake Videos: What You Need to Know

Article by Martin Bailey, AARP Community Ambassador, Silver Shield Task Force

(Posted March 2024)



Until recently, it was difficult to create a convincing fake video without considerable time and expertise. With the evolving capabilities of Artificial Intelligence (AI), that is no longer the case. AI-generated videos, called deepfakes, are challenging our ability to tell what is real and what is fake, empowering bad actors to spread misinformation. Deepfakes are also being used by scammers to steal money and personal information.

What is a Deepfake?

According to the University of Virginia Information Security office, a deepfake is an artificial image or video (a series of images) created using deep learning, a subset of machine learning, which in turn is a branch of AI that teaches computers to perform tasks through experience.

The process of producing complex deepfakes involves two algorithms. One is trained to produce the most convincing fake replicas possible of real images. The other is trained to detect when an image is fake and when it is not. The two models iterate back and forth, each getting better at its respective task. By pitting the models against each other, you end up with one that is extremely adept at producing fake images; so adept, in fact, that humans often can't tell the output is fake. This is what makes deepfakes dangerous.
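For readers curious how this back-and-forth works in practice, the pairing described above is known in machine learning as a generative adversarial network (GAN). The sketch below is a deliberately simplified toy, not a real deepfake system: the "images" are just numbers near 5.0, the discriminator's model of real data is a running average, and the generator has a single adjustable parameter. All names and update rules here are illustrative assumptions.

```python
import random

random.seed(0)

def real_sample():
    """'Real' data: numbers clustered around 5.0 (stand-ins for real images)."""
    return 5.0 + random.uniform(-0.5, 0.5)

disc_estimate = 0.0   # discriminator's learned picture of what real data looks like
gen_mean = 0.0        # generator's single parameter: the center of its fakes

def looks_real(x):
    """Discriminator score: the closer to its estimate, the more 'real' x looks."""
    return -abs(x - disc_estimate)

for step in range(1, 1001):
    real = real_sample()
    fake = gen_mean + random.uniform(-0.5, 0.5)

    # Discriminator improves by updating its running average of real samples.
    disc_estimate += (real - disc_estimate) / step

    # Generator improves by nudging its output toward whatever scores as more real.
    if looks_real(fake + 0.05) > looks_real(fake):
        gen_mean += 0.05
    elif looks_real(fake - 0.05) > looks_real(fake):
        gen_mean -= 0.05

# After many rounds, the generator's fakes end up centered near the real data,
# mirroring how a trained deepfake model produces output humans struggle to flag.
```

In a real GAN both models are deep neural networks with millions of parameters, but the dynamic is the same: each round of competition leaves the forger a little harder to catch.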

Uses of Deepfakes

The main purpose of a deepfake is to mislead people into believing something happened that didn't. Anyone with the capability to create deepfakes can release misinformation and influence us to behave in ways that advance their agenda. Deepfake-based misinformation could wreak havoc on both a small and large scale.

Although deepfakes can be (and have been) used for humor and entertainment, according to NortonLifeLock they can also serve a number of malicious purposes, including:

  • Phishing scams
  • Data breaches
  • Hoaxes
  • Celebrity pornography
  • Reputation smearing
  • Election manipulation
  • Social engineering
  • Automated disinformation attacks
  • Identity theft
  • Financial fraud
  • Blackmail 

How to Spot a Deepfake

Although deepfakes are often convincing, there are ways to spot them. Here are 15 things to look for if you think a video is fake, according to the cybersecurity firm NortonLifeLock:

  1. Unnatural eye movement. Eye movements that do not look natural — or a lack of eye movement, such as a lack of blinking — are red flags. 
  2. Unnatural facial expressions. When something doesn’t look right about a face, it could signal facial morphing. 
  3. Awkward facial-feature positioning. If someone’s face is pointing one way and their nose is pointing another, you should be skeptical.
  4. A lack of emotion. You may be able to spot facial morphing if someone’s face doesn’t seem to exhibit the emotion that should go along with what they’re saying.
  5. Awkward-looking body or posture. A body shape that doesn't look natural, or awkward or inconsistent positioning of the head and body, can signal a fake.
  6. Unnatural body movement. Someone who looks distorted or off when they turn to the side or move their head, or whose movements are jerky and disjointed from one frame to the next, may not be real.
  7. Unnatural coloring. Abnormal skin tone, discoloration, weird lighting, and misplaced shadows are all signs of a fake.
  8. Hair that doesn’t look real. You won’t see frizzy or flyaway hair because fake images won’t be able to generate these individual characteristics.
  9. Teeth that don’t look real. Algorithms may not be able to generate individual teeth, so an absence of outlines of individual teeth could be a clue.
  10. Blurring or misalignment. Blurry edges or misaligned visuals (e.g., where someone's face and neck meet their body) are telltale signs.
  11. Inconsistent noise or audio. Look for poor lip-syncing, robotic-sounding voices, strange word pronunciation, digital background noise, or even the absence of audio.
  12. Images that look unnatural when slowed down. If you watch a video on a screen that’s larger than your smartphone or have video-editing software that can slow down a video’s playback, you can zoom in and examine images more closely. Zoom in on lips to see if they’re really talking or if it’s bad lip-syncing. 
  13. Hashtag discrepancies. There’s a cryptographic algorithm that helps video creators show that their videos are authentic. The algorithm is used to insert hashtags at certain places throughout a video. If the hashtags change, then you should suspect the video has been manipulated.
  14. Digital fingerprints. Blockchain technology can also create a digital fingerprint for videos. When a video is created, its content is registered to a ledger that can't be changed. While not foolproof, this blockchain-based verification can help establish a video's authenticity.
  15. Reverse image searches. A search for an original image, or a reverse image search using a computer, can unearth similar videos online to help determine if an image, audio, or video has been altered. While reverse video search technology is not publicly available yet, investing in a tool like this could be helpful.

How to Verify that a Photo or Video is Authentic  

When checking the authenticity of a photo or video, we must take several factors into consideration. MUO (MakeUseOf), one of the largest online technology publications on the web, has provided a few tricks to help us distinguish between AI-generated images and real ones.

Check the Source

This can be hard to do when a video or photo goes viral on social media, but it's essential if you want to know whether you're looking at a fake. Try to trace the photo back to the original post to see who shared the image or video and why. If it shows shocking political events or messages, question why it is on social media but not mainstream media. If mainstream media is hesitant to pick up a story, it may be fake news supported by AI-generated images and videos.

Besides the title, description, and comments section, also check the original poster’s profile page to look for clues. Keywords like Midjourney or DALL-E, the names of two popular AI art generators, are enough to let you know that the images you're looking at could be AI-generated.

Look for a Watermark

Another important clue for identifying an AI-generated image is a watermark. DALL-E 2 places one on every photo you download from its site, though it may not be obvious at first. You can find it in the bottom right corner of the picture, and it looks like five squares colored yellow, turquoise, green, red, and blue. If you see this watermark on an image you come across, then you can be sure it was created using DALL-E 2. Unfortunately, it's easy to download the same image without a watermark. Midjourney doesn't use watermarks.

Search for Anomalies in the Image

You may not notice them at first, but AI-generated images often share odd visual markers that are most obvious when you take a closer look. Here are a few of the markers that you might find in AI-generated images of faces:

  • Missing or mismatched earrings.
  • A blurred background that looks more like a texture.
  • Background text that is indistinguishable or garbled.
  • Facial asymmetry (off-center teeth, eyes of different sizes).
  • Patches of the photo that look like they have been painted.
  • Objects like glasses blending into the skin.

Remember, scammers continuously evolve their techniques, including the use of AI, so it's crucial to stay informed, exercise caution, and trust your instincts. If you suspect an AI scam, report it to the appropriate authorities or the organization being impersonated to help prevent others from falling victim to the scam.


This article is part of the Golden Gazette monthly newsletter which covers a variety of topics and community news concerning older adults and caregivers in Fairfax County. Are you new to the Golden Gazette? Don’t miss out on future newsletters! Subscribe to get the electronic or free printed version mailed to you. Have a suggestion for a topic? Share it in an email or call 703-324-GOLD (4653).

